COGITAO: A Visual Reasoning Framework To Study Compositionality & Generalization

Taoudi-Benchekroun, Yassine, Troyan, Klim, Sager, Pascal, Gerber, Stefan, Tuggener, Lukas, Grewe, Benjamin

arXiv.org Artificial Intelligence

The ability to compose learned concepts and apply them in novel settings is key to human intelligence, but remains a persistent limitation in state-of-the-art machine learning models. To address this issue, we introduce COGITAO, a modular and extensible data generation framework and benchmark designed to systematically study compositionality and generalization in visual domains. Drawing inspiration from ARC-AGI's problem-setting, COGITAO constructs rule-based tasks which apply a set of transformations to objects in grid-like environments. It supports composition, at adjustable depth, over a set of 28 interoperable transformations, along with extensive control over grid parametrization and object properties. This flexibility enables the creation of millions of unique task rules -- surpassing concurrent datasets by several orders of magnitude -- across a wide range of difficulties, while allowing virtually unlimited sample generation per rule. We provide baseline experiments using state-of-the-art vision models, highlighting their consistent failures to generalize to novel combinations of familiar elements, despite strong in-domain performance. COGITAO is fully open-sourced, including all code and datasets, to support continued research in this field.
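The core mechanism the abstract describes — rule-based tasks built by composing grid transformations at an adjustable depth — can be illustrated with a minimal sketch. The transformation names and the `compose` helper below are hypothetical stand-ins, not COGITAO's actual API.

```python
import numpy as np

# Illustrative grid transformations in the spirit of COGITAO's
# interoperable set (COGITAO itself defines 28 of them).
def rotate90(grid):
    return np.rot90(grid)

def flip_horizontal(grid):
    return np.fliplr(grid)

def shift_right(grid):
    return np.roll(grid, 1, axis=1)

TRANSFORMS = [rotate90, flip_horizontal, shift_right]

def compose(transforms):
    """Build a single task rule by applying transformations left to right."""
    def rule(grid):
        for t in transforms:
            grid = t(grid)
        return grid
    return rule

# A depth-2 rule: rotate the grid, then mirror it horizontally.
rule = compose([rotate90, flip_horizontal])
sample = np.arange(9).reshape(3, 3)
out = rule(sample)
```

With n interoperable transformations composed to depth d there are on the order of n^d distinct rules, which is how composition over a modest transformation set yields millions of unique task rules while each rule can still generate unlimited input/output samples.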


GPT-FT: An Efficient Automated Feature Transformation Using GPT for Sequence Reconstruction and Performance Enhancement

Gao, Yang, Wang, Dongjie, Piersall, Scott, Zhang, Ye, Wang, Liqiang

arXiv.org Artificial Intelligence

Feature transformation plays a critical role in enhancing machine learning model performance by optimizing data representations. Recent state-of-the-art approaches address this task as a continuous embedding optimization problem, converting discrete search into a learnable process. Although effective, these methods often rely on sequential encoder-decoder structures that cause high computational costs and parameter requirements, limiting scalability and efficiency. To address these limitations, we propose a novel framework that accomplishes automated feature transformation through four steps: transformation records collection, embedding space construction with a revised Generative Pre-trained Transformer (GPT) model, gradient-ascent search, and autoregressive reconstruction. In our approach, the revised GPT model serves two primary functions: (a) feature transformation sequence reconstruction and (b) model performance estimation and enhancement for downstream tasks by constructing the embedding space. Such a multi-objective optimization framework reduces parameter size and accelerates transformation processes. Experimental results on benchmark datasets show that the proposed framework matches or exceeds baseline performance, with significant gains in computational efficiency. This work highlights the potential of transformer-based architectures for scalable, high-performance automated feature transformation.
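The gradient-ascent search step in the pipeline above can be sketched as follows. In GPT-FT the performance estimator is the revised GPT model itself; here a toy quadratic estimator with an analytic gradient stands in for it, so everything below is an illustrative assumption rather than the paper's implementation.

```python
import numpy as np

def estimator(z, z_star):
    """Toy performance estimator: predicted score peaks at z_star."""
    return -np.sum((z - z_star) ** 2)

def estimator_grad(z, z_star):
    """Analytic gradient of the toy estimator with respect to z."""
    return -2.0 * (z - z_star)

def gradient_ascent_search(z0, z_star, lr=0.1, steps=100):
    """Move an embedding toward higher predicted downstream performance."""
    z = z0.copy()
    for _ in range(steps):
        z = z + lr * estimator_grad(z, z_star)
    return z

# Start from the embedding of some collected transformation record and
# ascend the estimator's gradient in the continuous embedding space.
z_star = np.array([1.0, -2.0, 0.5])  # hypothetical optimum in embedding space
z_opt = gradient_ascent_search(np.zeros(3), z_star)
```

In the full framework, the optimized embedding `z_opt` would then be decoded autoregressively back into a discrete feature transformation sequence, closing the loop between continuous search and discrete reconstruction.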


A Experiment Details

Neural Information Processing Systems

The training procedure of the model presented in Section 6.2 differs from that of the models in Section 6.3. All models in Section 6.3 were trained with stochastic gradient descent. All models presented in this paper use the same 3-layer MLP for parameterizing the encoders and decoders. The encoder output is divided into 18 capsules, each of 18 dimensions. The decoder layers have output sizes (450, 675, 4096). For all topographic models (TVAE and BubbleVAE) in Section 6.3, the parameters governing the global topographic organization were chosen to be sufficiently large to achieve notably lower equivariance error than the VAE baseline, and thus demonstrate the impact of topographic organization without temporal coherence. The results of all models are shown in Section B below.
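The architecture described above can be sketched concretely. The decoder output sizes (450, 675, 4096) and the 18x18 capsule latent are taken from the text; the encoder's hidden sizes, the ReLU activations, and the 4096-dimensional input are assumptions chosen to mirror the decoder, since the text does not state them.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    """Initialize one fully connected layer as a (weights, bias) pair."""
    return rng.normal(0.0, 0.01, (in_dim, out_dim)), np.zeros(out_dim)

def mlp(x, layers):
    """Apply a stack of linear layers with ReLU between them."""
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

latent = 18 * 18  # 18 capsules x 18 dimensions = 324 latent units
# Encoder hidden sizes assumed to mirror the decoder's (450, 675, 4096).
encoder = [linear(4096, 675), linear(675, 450), linear(450, latent)]
decoder = [linear(latent, 450), linear(450, 675), linear(675, 4096)]

x = rng.normal(size=(2, 4096))       # toy batch of flattened inputs
z = mlp(x, encoder)
capsules = z.reshape(2, 18, 18)      # capsule view of the latent code
x_hat = mlp(capsules.reshape(2, -1), decoder)
```

The capsule reshape is the only structural difference from a plain autoencoder: the 324-dimensional code is treated as 18 groups of 18 units, which is what the topographic organization in the TVAE and BubbleVAE variants acts on.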